
    Benchmarking of Neuromorphic Hardware Systems

    Ostrau C, Klarhorst C, Thies M, Rückert U. Benchmarking of Neuromorphic Hardware Systems. In: Neuro-inspired Computational Elements Workshop (NICE ’20), March 17–20, 2020, Heidelberg, Germany. International Conference Proceeding Series (ICPS). Association for Computing Machinery (ACM); 2020.

    With more and more neuromorphic hardware systems for the acceleration of spiking neural networks available in science and industry, there is a demand for platform comparison and performance estimation of such systems. This work describes selected benchmarks implemented in a framework with exactly this target: independent black-box benchmarking and comparison of platforms suitable for the simulation/emulation of spiking neural networks.

    Comparing Neuromorphic Systems by Solving Sudoku Problems

    Ostrau C, Klarhorst C, Thies M, Rückert U. Comparing Neuromorphic Systems by Solving Sudoku Problems. In: Conference Proceedings: 2019 International Conference on High Performance Computing & Simulation (HPCS). Piscataway, NJ: IEEE; Accepted.

    In the field of neuromorphic computing several hardware accelerators for spiking neural networks have been introduced, but few studies actually compare different systems. These comparative studies reveal difficulties in porting an existing network to a specific system and in predicting its performance indicators. Finding a common network architecture that is suited for all target platforms and at the same time yields decent results is a major challenge. In this contribution, we show that a winner-takes-all inspired network structure can be employed to solve Sudoku puzzles on three diverse hardware accelerators. By exploring several network implementations, we measured the number of solved puzzles in a set of 100 assorted Sudokus, as well as time and energy to solution. Concerning the last two indicators, our measurements indicate that it can be beneficial to port a network to an analogue hardware system.
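    The winner-takes-all encoding described in the abstract can be sketched as follows (a hypothetical reconstruction for illustration, not the paper's implementation): one candidate unit per (row, column, digit) triple, with mutual inhibition between any two candidates that would violate a Sudoku constraint.

```python
import itertools

def build_wta_inhibition(n=9, box=3):
    """Construct the inhibitory connectivity of a winner-takes-all
    Sudoku network: one unit per (row, col, digit) candidate.
    Two units inhibit each other if they assign different digits to
    the same cell, or the same digit to the same row, column, or box."""
    units = list(itertools.product(range(n), range(n), range(1, n + 1)))
    inhibit = {u: set() for u in units}
    for (r1, c1, d1), (r2, c2, d2) in itertools.combinations(units, 2):
        same_cell = (r1, c1) == (r2, c2) and d1 != d2
        same_digit = d1 == d2 and (r1, c1) != (r2, c2)
        same_row = same_digit and r1 == r2
        same_col = same_digit and c1 == c2
        same_box = same_digit and (r1 // box, c1 // box) == (r2 // box, c2 // box)
        if same_cell or same_row or same_col or same_box:
            inhibit[(r1, c1, d1)].add((r2, c2, d2))
            inhibit[(r2, c2, d2)].add((r1, c1, d1))
    return inhibit
```

    In a spiking realization, each inhibitory edge would become a negative-weight synapse (e.g. between small excitatory populations), and the network settles on one surviving candidate per cell.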

    Development of Energy Models for Design Space Exploration of Embedded Many-Core Systems

    This paper introduces a methodology to develop energy models for the design space exploration of embedded many-core systems. The design process of such systems can benefit from sophisticated models. Software and hardware can be specifically optimized based on comprehensive knowledge about application scenario and hardware behavior. The contribution of our work is an automated framework to estimate the energy consumption at an arbitrary abstraction level without the need to provide further information about the system. We validated our framework with the configurable many-core system CoreVA-MPSoC. Compared to a simulation of the CoreVA-MPSoC on gate level in a 28 nm FD-SOI standard cell technology, our framework shows an average estimation error of about 4%. Comment: Presented at HIP3ES, 201
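    Such an abstract energy model can be illustrated as a weighted sum of counted hardware events; the event classes and per-event costs below are hypothetical placeholders, not values from the paper:

```python
# Hypothetical per-event energy costs in nanojoules (illustrative only;
# a real framework would fit these coefficients to gate-level simulation).
COSTS_NJ = {
    "cpu_instr": 0.02,    # executed CPU instruction
    "mem_access": 0.15,   # local memory access
    "noc_flit": 0.08,     # flit sent over the on-chip network
    "idle_cycle": 0.005,  # static/idle cost per cycle
}

def estimate_energy_nj(activity):
    """Estimate total energy as a weighted sum of counted events,
    where `activity` maps event class -> occurrence count."""
    return sum(COSTS_NJ[event] * count for event, count in activity.items())
```

    With fitted coefficients, the same counts can be collected at a high abstraction level (e.g. an instruction-set simulator) instead of at gate level, which is what makes the estimation fast.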

    Benchmarking and Characterization of event-based Neuromorphic Hardware

    Ostrau C, Klarhorst C, Thies M, Rückert U. Benchmarking and Characterization of event-based Neuromorphic Hardware. Presented at: FastPath 2019 - International Workshop on Performance Analysis of Machine Learning Systems, Madison, Wisconsin, USA.

    We present the modular framework SNABSuite (Spiking Neural Architecture Benchmark Suite) for black-box benchmarking of neuromorphic hardware systems and spiking neural network software simulators. The motivation for having a coherent collection of benchmarks is twofold: first, benchmarks evaluated on different platforms provide measures for direct comparison of performance indicators (e.g. resource efficiency, quality of the result, robustness). By using the platforms as they are provided for possible end-users and evaluating selected performance indicators, benchmarks support the decision for or against a system based on use-case requirements. Second, benchmarks may reveal opportunities for effective improvements of a system and can contribute to future development. Systems like the Heidelberg BrainScaleS project, IBM TrueNorth, the Manchester SpiNNaker project or the Intel Loihi platform drive the evolution of neuromorphic hardware implementations, while comparable benchmarks and corresponding measures are still rare. We show our methodology for comparing such diverse systems by applying a modular framework, with a user-centric view based on configurable spiking neural network descriptions.

    An Abstract Model for Performance Estimation of the Embedded Multiprocessor CoreVA-MPSoC Technical Report (v1.0)

    Ax J, Flasskamp M, Sievers G, Klarhorst C, Jungeblut T, Kelly W. An Abstract Model for Performance Estimation of the Embedded Multiprocessor CoreVA-MPSoC. Technical Report (v1.0); 2015.

    Performance Estimation of Streaming Applications for Hierarchical MPSoCs

    Flasskamp M, Sievers G, Ax J, et al. Performance Estimation of Streaming Applications for Hierarchical MPSoCs. In: Workshop on Rapid Simulation and Performance Evaluation: Methods and Tools (RAPIDO). New York, NY: ACM Press; 2016: 1

    ML4ProFlow: A Framework for Low-Code Data Processing from Edge to Cloud in Industrial Production

    Klarhorst C, Quirin D, Hesse M, Rückert U. ML4ProFlow: A Framework for Low-Code Data Processing from Edge to Cloud in Industrial Production. In: 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA). 2022.

    One necessary part of Industry 4.0 is the availability and accessibility of data processing pipelines. This paper shows the ongoing development of ML4ProFlow, a framework that brings together the following parts: First, it provides the management of execution environments. Second, it specifies processing modules that focus on reusability and cross-platform usage. Third, it comes with a benchmarking automation to help developers implement and analyze modules and their combinations. These three integral parts of the framework are presented and their usability is demonstrated.
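    The reusable-module idea can be sketched with a uniform processing interface (the class and method names below are hypothetical, not ML4ProFlow's actual API): each module exposes one step, so the same chain composes regardless of where it runs.

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """Hypothetical processing-module interface: every module implements
    a single process() step, so pipelines compose from reusable parts."""
    @abstractmethod
    def process(self, data):
        ...

class Scale(Module):
    """Toy module that multiplies every sample by a constant factor."""
    def __init__(self, factor):
        self.factor = factor

    def process(self, data):
        return [x * self.factor for x in data]

def run_pipeline(modules, data):
    """Feed the output of each module into the next one."""
    for module in modules:
        data = module.process(data)
    return data
```

    A benchmarking automation, as described above, could then time each `process()` call per module and per module combination on a given execution environment.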

    Benchmarking Neuromorphic Hardware and Its Energy Expenditure

    Ostrau C, Klarhorst C, Thies M, Rückert U. Benchmarking Neuromorphic Hardware and Its Energy Expenditure. Frontiers in Neuroscience. 2022;16: 873935.

    We propose and discuss a platform overarching benchmark suite for neuromorphic hardware. This suite covers benchmarks from low-level characterization to high-level application evaluation using benchmark specific metrics. With this rather broad approach we are able to compare various hardware systems including mixed-signal and fully digital neuromorphic architectures. Selected benchmarks are discussed and results for several target platforms are presented, revealing characteristic differences between the various systems. Furthermore, a proposed energy model allows benchmark performance metrics to be combined with energy efficiency. This model enables the prediction of the energy expenditure of a network on a target system without actually having access to it. To quantify the efficiency gap between neuromorphics and the biological paragon of the human brain, the energy model is used to estimate the energy required for a full brain simulation. This reveals that current neuromorphic systems are at least four orders of magnitude less efficient. It is argued that even with a modern fabrication process, two to three orders of magnitude are remaining. Finally, for selected benchmarks the performance and efficiency of the neuromorphic solution is compared to standard approaches.
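    One common way to make such a prediction without access to the target system is to decompose energy into a static term plus per-event costs; the decomposition below is an illustrative sketch with hypothetical coefficients, not the paper's fitted model:

```python
def predict_energy_j(runtime_s, n_spikes, n_syn_events,
                     p_idle_w, e_spike_j, e_syn_j):
    """Predict total energy (J) of a network run as
       static power * runtime + cost per spike + cost per synaptic event.
    The three coefficients would be fitted once per platform from
    low-level characterization benchmarks."""
    return (p_idle_w * runtime_s
            + e_spike_j * n_spikes
            + e_syn_j * n_syn_events)
```

    Given the spike and synaptic-event counts of a network (obtainable from any simulator), the same formula then predicts its energy on every characterized platform.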

    Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware

    Ostrau C, Homburg JD, Klarhorst C, Thies M, Rückert U. Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware. In: Artificial Neural Networks and Machine Learning – ICANN 2020. Springer International Publishing; 2020.

    With more and more event-based neuromorphic hardware systems being developed at universities and in industry, there is a growing need for assessing their performance with domain specific measures. In this work, we use the methodology of converting pre-trained non-spiking neural networks to spiking ones to evaluate the performance loss and measure the energy-per-inference for three neuromorphic hardware systems (BrainScaleS, Spikey, SpiNNaker) and common simulation frameworks for CPU (NEST) and CPU/GPU (GeNN). For analog hardware we further apply a re-training technique known as hardware-in-the-loop training to cope with device mismatch. This analysis is performed for five different networks, including three networks that have been found by an automated optimization with a neural architecture search framework. We demonstrate that the conversion loss is usually below one percent for digital implementations, and moderately higher for analog systems with the benefit of much lower energy-per-inference costs.
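    A central step in such rate-based ANN-to-SNN conversion is weight normalization, which rescales each layer so that the largest observed ReLU activation maps onto the maximum firing rate. The sketch below shows the common data-based variant as an illustration; it is not claimed to be the exact procedure used in the paper:

```python
import numpy as np

def normalize_weights(weights, activations):
    """Data-based weight normalization for ANN-to-SNN conversion:
    rescale each layer's weight matrix by the maximum activation
    observed on a calibration set, so rate-coded spiking neurons
    never have to fire faster than their maximum rate.
    `weights[i]` is layer i's weight matrix, `activations[i]` the
    corresponding pre-recorded ReLU activations."""
    normalized, prev_scale = [], 1.0
    for w, act in zip(weights, activations):
        scale = act.max()
        # Undo the previous layer's rescaling, then apply this layer's.
        normalized.append(w * prev_scale / scale)
        prev_scale = scale
    return normalized
```

    After normalization, the converted network's firing rates approximate the original ReLU activations up to a known global scale, which is why conversion loss can stay below one percent on digital platforms.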